New physics
New Expansion Rate Anomalies at Characteristic Redshifts Geometrically Determined using DESI-DR2 BAO and DES-SN5YR Observations
Mukherjee, Purba, Sen, Anjan A
We perform a model-independent reconstruction of cosmic distances using the Multi-Task Gaussian Process (MTGP) framework as well as knot-based spline techniques with the DESI-DR2 BAO and DES-SN5YR datasets. We calibrate the comoving sound horizon at the baryon drag epoch $r_d$ to the Planck value, ensuring consistency with early-universe physics. With the reconstructed cosmic distances and their derivatives, we obtain seven characteristic redshifts in the range $0.3 \leq z \leq 1.7$. We derive the normalized expansion rate of the Universe $E(z)$ at these redshifts. Our findings reveal significant deviations of approximately $4$ to $5σ$ from the Planck 2018 $Λ$CDM predictions, particularly pronounced in the redshift range $z \sim 0.35-0.55$. These anomalies are consistently observed across both reconstruction methods and combined datasets, indicating robust late-time tensions in the expansion rate of the Universe that are distinct from the existing "Hubble tension". This could signal new physics beyond the standard cosmological framework in this redshift range. Our findings underscore the role of characteristic redshifts as sensitive indicators of expansion rate anomalies and motivate further scrutiny with forthcoming datasets from DESI-5YR BAO, Euclid, and LSST. These future surveys will tighten constraints and help confirm whether these late-time anomalies arise from new fundamental physics or unresolved systematics in the data.
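The abstract rests on a simple geometric relation: since the comoving distance is $D_C(z) = (c/H_0)\int_0^z dz'/E(z')$, the normalized expansion rate follows from its derivative, $E(z) = c/(H_0\,D_C'(z))$. The paper's MTGP and spline machinery is not reproduced here; the sketch below only illustrates that relation with a toy flat-$\Lambda$CDM fiducial and finite differences (all parameter values are illustrative assumptions, not the paper's):

```python
import math

C = 299792.458   # speed of light, km/s
H0 = 67.4        # assumed fiducial H0, km/s/Mpc (illustrative, Planck-like)

def E_lcdm(z, om=0.315):
    """Flat-LCDM normalized expansion rate (toy fiducial model)."""
    return math.sqrt(om * (1 + z) ** 3 + 1 - om)

def comoving_distance(z, n=2000):
    """D_C(z) = (c/H0) * integral_0^z dz'/E(z'), trapezoid rule."""
    h = z / n
    s = 0.5 * (1 / E_lcdm(0.0) + 1 / E_lcdm(z))
    for i in range(1, n):
        s += 1 / E_lcdm(i * h)
    return (C / H0) * s * h

def E_from_distances(z, dz=1e-3):
    """Recover E(z) = c / (H0 * dD_C/dz) by central finite differences,
    standing in for the derivative of a reconstructed distance curve."""
    dDdz = (comoving_distance(z + dz) - comoving_distance(z - dz)) / (2 * dz)
    return C / (H0 * dDdz)

for z in (0.35, 0.55, 1.0):
    print(z, E_from_distances(z), E_lcdm(z))
```

In the actual analysis the derivative comes from the reconstructed (data-driven) distance curve rather than a fiducial model, so deviations of the recovered $E(z)$ from the $\Lambda$CDM prediction are what signal the anomaly.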
Agents of Discovery
Diefenbacher, Sascha, Hallin, Anna, Kasieczka, Gregor, Krämer, Michael, Lauscher, Anne, Lukas, Tim
The substantial data volumes encountered in modern particle physics and other domains of fundamental physics research allow (and require) the use of increasingly complex data analysis tools and workflows. While the use of machine learning (ML) tools for data analysis has recently proliferated, these tools are typically special-purpose algorithms that rely, for example, on encoded physics knowledge to reach optimal performance. In this work, we investigate a new and orthogonal direction: using recent progress in large language models (LLMs) to create a team of agents -- instances of LLMs with specific subtasks -- that jointly solve data analysis-based research problems in a way similar to how a human researcher might: by creating code to operate standard tools and libraries (including ML systems) and by building on results of previous iterations. If successful, such agent-based systems could be deployed to automate routine analysis components to counteract the increasing complexity of modern tool chains. To investigate the capabilities of current-generation commercial LLMs, we consider the task of anomaly detection via the publicly available and highly studied LHC Olympics dataset. Several current models by OpenAI (GPT-4o, o4-mini, GPT-4.1, and GPT-5) are investigated and their stability is tested. Overall, we observe the capacity of the agent-based system to solve this data analysis problem. The best agent-created solutions mirror the performance of human state-of-the-art results.
- Europe > Austria > Vienna (0.14)
- Europe > Germany > Hamburg (0.04)
- North America > United States > New Mexico > Bernalillo County > Albuquerque (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
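The core control flow described above (an agent proposes analysis code, the code is executed, and the result feeds back into the next proposal) can be sketched without any real LLM. In this minimal, hypothetical version, `llm_propose` is a stub standing in for an API call to a commercial model; only the propose-execute-refine loop is the point:

```python
# Minimal propose-execute-refine agent loop. `llm_propose` is a
# hypothetical stand-in for a real LLM API call; it returns candidate
# analysis code as a string, conditioned on the history of past attempts.

def llm_propose(task, history):
    # Stub: a real agent would query an LLM here. We cycle through
    # hand-written candidates just to exercise the control flow.
    candidates = [
        "score = sum(data) / len(data)",        # crude baseline: mean
        "score = max(data)",                    # a different heuristic
        "score = sorted(data)[len(data) // 2]", # median
    ]
    return candidates[len(history) % len(candidates)]

def evaluate(code, data):
    """Execute candidate code in a scratch namespace; return its score."""
    ns = {"data": data}
    exec(code, {}, ns)
    return ns["score"]

def run_agent(task, data, n_iter=3):
    history, best = [], None
    for _ in range(n_iter):
        code = llm_propose(task, history)
        score = evaluate(code, data)
        history.append((code, score))  # feedback for the next round
        if best is None or score > best[1]:
            best = (code, score)
    return best

best_code, best_score = run_agent("maximize score", [1, 5, 3])
print(best_code, best_score)
```

A production system would add sandboxing around `exec`, richer feedback (tracebacks, plots, metrics) in `history`, and role-specialized agents rather than a single loop.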
Strengthening Anomaly Awareness
Banda, Adam, Khosa, Charanjit K., Sanz, Veronica
We present a refined version of the Anomaly Awareness framework for enhancing unsupervised anomaly detection. Our approach introduces minimal supervision into Variational Autoencoders (VAEs) through a two-stage training strategy: the model is first trained in an unsupervised manner on background data, and then fine-tuned using a small sample of labeled anomalies to encourage larger reconstruction errors for anomalous samples. We validate the method across diverse domains, including the MNIST dataset with synthetic anomalies, network intrusion data from the CICIDS benchmark, collider physics data from the LHCO2020 dataset, and simulated events from the Standard Model Effective Field Theory (SMEFT). The latter provides a realistic example of subtle kinematic deviations in Higgs boson production. In all cases, the model demonstrates improved sensitivity to unseen anomalies, achieving better separation between normal and anomalous samples. These results indicate that even limited anomaly information, when incorporated through targeted fine-tuning, can substantially improve the generalization and performance of unsupervised models for anomaly detection.
- Europe > United Kingdom (0.14)
- North America > United States (0.04)
- Europe > Spain > Valencian Community > Valencia Province > Valencia (0.04)
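The two-stage recipe (unsupervised fit on background, then a fine-tuning step that enlarges the scores of a few labeled anomalies) can be shown with a toy surrogate rather than a VAE. Here the "model" is a single centroid and the anomaly score is squared distance to it, playing the role of a reconstruction error; the fine-tuning loss and its gradient are illustrative assumptions, not the paper's objective:

```python
# Toy surrogate for the two-stage Anomaly Awareness scheme: the "model"
# is a centroid c, and the anomaly score is ||x - c||^2 (standing in
# for a VAE reconstruction error).

def score(x, c):
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

def stage1_fit(background):
    """Stage 1 (unsupervised): centroid minimizing the mean score
    over background data, i.e. the background mean."""
    d, n = len(background[0]), len(background)
    return [sum(x[i] for x in background) / n for i in range(d)]

def stage2_finetune(c, background, anomalies, lam=0.3, lr=0.1, steps=100):
    """Stage 2 (minimal supervision): gradient descent on
    mean_b ||xb - c||^2 - lam * mean_a ||xa - c||^2,
    keeping background scores low while pushing anomaly scores up."""
    d = len(c)
    mb = [sum(x[i] for x in background) / len(background) for i in range(d)]
    ma = [sum(x[i] for x in anomalies) / len(anomalies) for i in range(d)]
    for _ in range(steps):
        grad = [2 * (c[i] - mb[i]) - 2 * lam * (c[i] - ma[i]) for i in range(d)]
        c = [c[i] - lr * grad[i] for i in range(d)]
    return c

background = [[0.0, 0.1], [0.2, -0.1], [-0.1, 0.0], [0.1, 0.2]]
anomalies = [[2.0, 2.0], [2.2, 1.8]]  # small labeled sample

c0 = stage1_fit(background)
c1 = stage2_finetune(c0, background, anomalies)

gap0 = score(anomalies[0], c0) - score(background[0], c0)
gap1 = score(anomalies[0], c1) - score(background[0], c1)
print(gap0, gap1)  # separation between anomalous and normal grows
```

The qualitative effect mirrors the abstract's claim: after fine-tuning with a handful of labeled anomalies, the score gap between anomalous and normal samples widens.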
TRANSIT your events into a new mass: Fast background interpolation for weakly-supervised anomaly searches
Oleksiyuk, Ivan, Voloshynovskiy, Svyatoslav, Golling, Tobias
We introduce a new model for conditional and continuous data morphing called TRansport Adversarial Network for Smooth InTerpolation (TRANSIT). We apply it to create a background data template for weakly-supervised searches at the LHC. The method smoothly transforms sideband events to match signal region mass distributions. We demonstrate the performance of TRANSIT using the LHC Olympics R&D dataset. The model captures non-linear mass correlations of features and produces a template that offers a competitive anomaly sensitivity compared to state-of-the-art transport-based template generators. Moreover, the computational training time required for TRANSIT is an order of magnitude lower than that of competing deep learning methods. This makes it ideal for analyses that iterate over many signal regions and signal models. Unlike generative models, which must learn a full probability density distribution, i.e., the correlations between all the variables, the proposed transport model only has to learn a smooth conditional shift of the distribution. This allows for a simpler, more efficient residual architecture, enabling mass uncorrelated features to pass the network unchanged while the mass correlated features are adjusted accordingly. Furthermore, we show that the latent space of the model provides a set of mass decorrelated features useful for anomaly detection without background sculpting.
- Information Technology > Data Science > Data Mining > Anomaly Detection (0.95)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.67)
Multiple testing for signal-agnostic searches of new physics with machine learning
In this work, we address the question of how to enhance signal-agnostic searches by leveraging multiple testing strategies. Specifically, we consider hypothesis tests relying on machine learning, where model selection can introduce a bias towards specific families of new physics signals. We show that it is beneficial to combine different tests, characterised by distinct choices of hyperparameters, and that performance comparable to the best available test is generally achieved while providing a more uniform response to various types of anomalies. Focusing on the New Physics Learning Machine, a methodology to perform a signal-agnostic likelihood-ratio test, we explore a number of approaches to multiple testing, such as combining p-values and aggregating test statistics.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- Europe > Italy > Liguria > Genoa (0.04)
- Research Report > New Finding (0.68)
- Research Report > Experimental Study (0.51)
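Of the combination strategies the abstract names, p-value combination has a classical closed form: Fisher's method, where $T = -2\sum_i \ln p_i$ follows a $\chi^2$ distribution with $2k$ degrees of freedom under the global null of $k$ independent tests. A minimal sketch (not the paper's specific procedure) alongside a simple min-p aggregation:

```python
import math

def fisher_combine(pvalues):
    """Fisher's method: T = -2 * sum(log p_i) ~ chi^2 with 2k dof
    under the global null of k independent tests."""
    k = len(pvalues)
    t = -2.0 * sum(math.log(p) for p in pvalues)
    # The chi^2 survival function for even dof 2k has a closed form:
    # P(T > t) = exp(-t/2) * sum_{i=0}^{k-1} (t/2)^i / i!
    half = t / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(k))

def min_p_combine(pvalues):
    """Aggregate by the most extreme test, with a Bonferroni-style
    correction for having looked k times."""
    return min(1.0, len(pvalues) * min(pvalues))

print(fisher_combine([0.05, 0.05]))  # two mildly significant tests combine
print(min_p_combine([0.05, 0.05]))
```

Two independent p-values of 0.05 combine under Fisher's method to roughly 0.017: evidence accumulates across tests, which is what makes the combined search respond more uniformly across anomaly types than any single hyperparameter choice.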
Read New Scientist's 5 best long reads of 2022 for free
To read our top 5 feature articles of 2022, click through to an article and follow the prompts to register with New Scientist for free. Some of our biggest-hitting stories this year asked mind-bending questions about physics, spoke to our readers about the issues facing their everyday lives or were in-depth exclusives uncovered by New Scientist staff. As a holiday gift to you, we have curated a selection of some of our best feature articles, from the latest anti-ageing research to hints of entirely new physics. These in-depth stories are usually only available to paid subscribers, but you will be able to read them for free between 25 December and the end of the year. Here is our pick of the best and why they made the cut. It might sound obvious to say that what you eat can make you live longer.
An AI Just Independently Discovered Alternate Physics
Grab any physics textbook and you'll find formula after formula describing how things wobble, fly, swerve and stop. The formulas describe actions we can observe, but behind each could be sets of factors that aren't immediately obvious. Now, a new AI program developed by researchers at Columbia University has seemingly discovered its own alternative physics. After being shown videos of physical phenomena on Earth, the AI didn't rediscover the current variables we use; instead, it actually came up with new variables to explain what it saw. To be clear, this doesn't mean our current physics are flawed or that there's a better-fitting model to explain the world around us. (Einstein's laws have proved incredibly robust.)
Learning new physics efficiently with nonparametric methods
Letizia, Marco, Losapio, Gianvito, Rando, Marco, Grosso, Gaia, Wulzer, Andrea, Pierini, Maurizio, Zanetti, Marco, Rosasco, Lorenzo
Experimental observations and convincing conceptual arguments indicate that the present understanding of fundamental physics is not complete. Our theoretical formulation of the fundamental laws of Nature, the Standard Model, has been predicting with extremely high precision an impressive amount of data collected at past and ongoing experiments. On the other hand, the Standard Model does not provide answers to a multitude of questions, including the origin of the electroweak scale, the mass of neutrinos, and the flavour structure in the quark, lepton and neutrino sectors, and is unable to account for observed phenomena like the origin and composition of dark matter or the baryon asymmetry of the Universe. Further, it does not provide a microscopic description of gravity. These considerations guarantee the existence of more fundamental laws of Nature waiting to be unveiled. In order to access these laws, we must search the experimental data for phenomena that depart from the Standard Model predictions. Currently, the most common searching strategy is to test the data for the presence of specific new physics models, one at a time. Each search is then optimized to be sensitive to the features specific to the considered new physics scenario. This approach is in general insensitive to sources of discrepancy that differ from those considered.
- Europe > Italy (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Switzerland > Geneva > Geneva (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.95)
- Information Technology > Data Science (0.94)
Online-compatible Unsupervised Non-resonant Anomaly Detection
Mikuni, Vinicius, Nachman, Benjamin, Shih, David
There is a growing need for anomaly detection methods that can broaden the search for new particles in a model-agnostic manner. Most proposals for new methods focus exclusively on signal sensitivity. However, it is not enough to select anomalous events - there must also be a strategy to provide context to the selected events. We propose the first complete strategy for unsupervised detection of non-resonant anomalies that includes both signal sensitivity and a data-driven method for background estimation. Our technique is built out of two simultaneously-trained autoencoders that are forced to be decorrelated from each other. This method can be deployed offline for non-resonant anomaly detection and is also the first complete online-compatible anomaly detection strategy. We show that our method achieves excellent performance on a variety of signals prepared for the ADC2021 data challenge.
- North America > United States > California > Alameda County > Berkeley (0.14)
- North America > United States > New Jersey > Middlesex County > Piscataway (0.04)
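Why insist that the two autoencoders be decorrelated? Because independence of the two anomaly scores on background is what enables a data-driven background estimate: with two independent selections, the background yield in the doubly-selected signal region can be predicted from the three sideband regions. A generic ABCD-style sketch of that idea (a standard technique, not necessarily the paper's exact estimator):

```python
import random

# If two anomaly scores are statistically independent (decorrelated) on
# background, the background yield in the doubly-selected region A can
# be predicted from the sidebands: N_A ~ N_B * N_C / N_D.

def abcd_estimate(scores1, scores2, cut1, cut2):
    a = b = c = d = 0
    for s1, s2 in zip(scores1, scores2):
        if s1 > cut1 and s2 > cut2:
            a += 1  # region A (signal region): both scores high
        elif s1 > cut1:
            b += 1  # region B: only score 1 high
        elif s2 > cut2:
            c += 1  # region C: only score 2 high
        else:
            d += 1  # region D: neither score high
    return a, b * c / d  # observed vs. predicted background in A

random.seed(0)
n = 100_000
s1 = [random.random() for _ in range(n)]  # independent background scores
s2 = [random.random() for _ in range(n)]
observed, predicted = abcd_estimate(s1, s2, 0.9, 0.9)
print(observed, predicted)
```

On truly independent scores the prediction tracks the observed count; any residual correlation between the two autoencoders would bias it, which is why the decorrelation constraint is enforced during training.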
Autonomous machine learning boost for quantum sensors
Researchers in the UK have developed an autonomous machine learning algorithm that dramatically simplifies the modelling of quantum systems. Researchers at the University of Bristol's Quantum Engineering Technology Labs (QETLabs) developed a new protocol to formulate and validate approximate models for quantum systems of interest. The Quantum Model Learning Agent (QMLA) algorithm works autonomously, designing and performing experiments on the targeted quantum system, with the resultant data being fed back into the algorithm. It proposes candidate Hamiltonian models to describe the target system, and distinguishes between them using statistical metrics, namely Bayes factors. The researchers were able to use the algorithm on a real-life quantum experiment involving defect centres in a diamond, a well-studied platform for quantum information processing and quantum sensing.
- Europe > United Kingdom (0.26)
- Europe > Austria > Vienna (0.06)
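The Bayes factor at the heart of QMLA's model comparison is just the ratio of how well two candidate models explain the same data. A deliberately simplified sketch with two fixed-parameter Gaussian candidates standing in for candidate Hamiltonian models (all data and parameter values below are hypothetical):

```python
import math

def log_likelihood(data, mu, sigma=1.0):
    """Log-likelihood of data under a Gaussian model N(mu, sigma^2),
    standing in for a candidate model's predicted measurement statistics."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

def bayes_factor(data, mu_a, mu_b):
    """BF = p(data | model A) / p(data | model B); BF > 1 favours A.
    With fixed parameters the evidence reduces to the likelihood."""
    return math.exp(log_likelihood(data, mu_a) - log_likelihood(data, mu_b))

# Hypothetical measurement outcomes clustered near 1.0
data = [0.9, 1.1, 1.0, 0.8, 1.2]
bf = bayes_factor(data, mu_a=1.0, mu_b=0.0)
print(bf)  # > 1: the data favour candidate model A
```

In QMLA the candidates are Hamiltonians with free parameters, so the evidence involves integrating over those parameters, and the experiments themselves are chosen adaptively to best discriminate the surviving candidates.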